Zero-shot learning on 3D point cloud data is a related, underexplored problem compared to its 2D image counterpart. 3D data brings new challenges for ZSL due to the unavailability of robust pre-trained feature extraction models. To address this problem, we propose a prompt-guided 3D scene generation and supervision method that augments the 3D data to learn the network better, exploring the complex interplay of seen and unseen objects. First, we merge the point clouds of two 3D models in certain ways described by a prompt. The prompt acts like the annotation describing each 3D scene. Later, we perform contrastive learning to train our proposed architecture in an end-to-end manner. We argue that 3D scenes can relate objects more efficiently than single objects because popular language models (like BERT) can achieve high performance when objects appear in context. Our proposed prompt-guided scene generation method encapsulates data augmentation and prompt-based annotation/captioning to improve 3D ZSL performance. We have achieved state-of-the-art ZSL and generalized ZSL performance on synthetic (ModelNet40, ModelNet10) and real-scanned (ScanObjectNN) 3D object datasets.
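As an illustration of the scene-generation step, the following sketch merges two object point clouds according to a spatial relation named in a prompt template; the relation vocabulary, offsets, and function names are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of prompt-guided scene generation, assuming numpy point
# clouds of shape (N, 3); the relations and offsets are illustrative guesses.
import numpy as np

RELATIONS = {
    "next to": np.array([1.0, 0.0, 0.0]),
    "on top of": np.array([0.0, 0.0, 1.0]),
    "behind": np.array([0.0, 1.0, 0.0]),
}

def normalize(points):
    """Center a point cloud and scale it into the unit sphere."""
    points = points - points.mean(axis=0)
    return points / np.linalg.norm(points, axis=1).max()

def make_scene(pc_a, pc_b, label_a, label_b, relation):
    """Merge two object point clouds and build the prompt annotation."""
    offset = RELATIONS[relation]
    scene = np.concatenate([normalize(pc_a), normalize(pc_b) + offset])
    prompt = f"a {label_a} {relation} a {label_b}"  # acts as the scene caption
    return scene, prompt

# Example: compose a synthetic two-object scene with its caption.
chair = np.random.randn(1024, 3)
table = np.random.randn(1024, 3)
scene, prompt = make_scene(chair, table, "chair", "table", "next to")
```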
We study the relationship between adversarial robustness and differential privacy in high-dimensional algorithmic statistics. We give the first black-box reduction from privacy to robustness which can produce private estimators with optimal tradeoffs among sample complexity, accuracy, and privacy for a wide range of fundamental high-dimensional parameter estimation problems, including mean and covariance estimation. We show that this reduction can be implemented in polynomial time in some important special cases. In particular, using nearly-optimal polynomial-time robust estimators for the mean and covariance of high-dimensional Gaussians which are based on the Sum-of-Squares method, we design the first polynomial-time private estimators for these problems with nearly-optimal sample-accuracy-privacy tradeoffs. Our algorithms are also robust to a constant fraction of adversarially-corrupted samples.
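The high-level connection between robustness and privacy can be illustrated with a toy example: a robust estimator changes little when a few samples change, which bounds its sensitivity, so noise calibrated to that bound yields differential privacy. The sketch below uses a trimmed mean with a heuristic sensitivity bound; it is not the paper's Sum-of-Squares construction, and the noise scale is illustrative only.

```python
# Toy privacy-from-robustness: trimmed mean plus the Gaussian mechanism.
# The sensitivity bound is heuristic, assuming data lie in a ball of `radius`.
import numpy as np
from scipy import stats

def private_robust_mean(samples, epsilon, delta, trim=0.1, radius=1.0):
    """Trimmed mean with Gaussian noise calibrated to a heuristic sensitivity."""
    est = stats.trim_mean(samples, proportiontocut=trim, axis=0)
    n = samples.shape[0]
    # Changing one of n bounded samples moves the trimmed mean by O(radius / n).
    sensitivity = 2.0 * radius / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return est + np.random.normal(scale=sigma, size=est.shape)
```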
The classification of sleep stages plays a crucial role in understanding and diagnosing sleep pathophysiology. Sleep stage scoring relies heavily on visual inspection by an expert, which is a time-consuming and subjective procedure. Recently, deep learning neural network approaches have been leveraged to develop generalized automated sleep staging and to account for shifts in distribution that may be caused by inherent inter/intra-subject variability, heterogeneity across datasets, and different recording environments. However, these networks ignore the connections among brain regions and disregard the sequential connections between temporally adjacent sleep epochs. To address these issues, this work proposes an adaptive product-graph-learning-based graph convolutional network, named ProductGraphSleepNet, for learning joint spatio-temporal graphs, together with a bidirectional gated recurrent unit and a modified graph attention network to capture the attentive dynamics of sleep stage transitions. Evaluation on two public databases, the Montreal Archive of Sleep Studies (MASS) SS3 and the SleepEDF, which contain full-night polysomnography recordings of 62 and 20 healthy subjects respectively, demonstrates performance comparable to the state-of-the-art (Accuracy: 0.867 and 0.838; F1-score: 0.818 and 0.774; Kappa: 0.802 and 0.775, respectively). More importantly, the proposed network makes it possible for clinicians to comprehend and interpret the learned connectivity graphs for sleep stages.
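A schematic PyTorch sketch of the overall pipeline follows: a graph convolution over EEG channels with a learned adjacency, then a bidirectional GRU over adjacent epochs. Layer sizes and the adjacency parameterisation are illustrative, not the exact ProductGraphSleepNet design.

```python
# Schematic spatio-temporal sleep stager: adaptive graph over channels,
# BiGRU over the epoch sequence. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class SpatioTemporalSleepNet(nn.Module):
    def __init__(self, n_channels=20, n_features=9, hidden=64, n_stages=5):
        super().__init__()
        # Adjacency over brain regions, learned end-to-end (adaptive graph).
        self.adj = nn.Parameter(torch.eye(n_channels))
        self.gcn = nn.Linear(n_features, hidden)
        self.gru = nn.GRU(n_channels * hidden, hidden,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_stages)

    def forward(self, x):
        # x: (batch, epochs, channels, features) -- a sequence of sleep epochs.
        A = torch.softmax(self.adj, dim=-1)      # row-normalized adjacency
        h = torch.relu(A @ self.gcn(x))          # spatial graph convolution
        h = h.flatten(start_dim=2)               # merge channel/feature dims
        h, _ = self.gru(h)                       # temporal context (BiGRU)
        return self.head(h)                      # per-epoch stage logits
```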
A major challenge in machine learning is resilience to out-of-distribution data, that is, data that exists outside the distribution of a model's training data. Training is often performed using limited, carefully curated datasets, so when a model is deployed there is often a significant distribution shift as edge cases and anomalies not included in the training data are encountered. To address this, we propose the Input Optimisation Network, an image preprocessing model that learns to optimise input data for a specific target vision model. In this work we investigate several out-of-distribution scenarios in the context of semantic segmentation for autonomous vehicles, comparing an input-optimisation-based solution to the existing approaches of finetuning the target model with augmented training data and of an adversarially trained preprocessing model. We demonstrate that our approach can enable performance on such data comparable to that of a finetuned model, and subsequently that a combined approach, whereby an input optimisation network is optimised to target a finetuned model, delivers superior performance to either method in isolation. Finally, we propose a joint optimisation approach, in which the input optimisation network and target model are trained simultaneously, which we demonstrate achieves significant further performance gains, particularly in challenging edge-case scenarios. We also demonstrate that our architecture can be reduced to a relatively compact size without a significant performance impact, potentially facilitating real-time embedded applications.
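A minimal sketch of the core training loop, assuming a frozen target segmentation model and a trainable preprocessing network; the model classes and the loss are placeholders for whatever the deployed pipeline actually uses.

```python
# Train a preprocessing network against a frozen target model: gradients flow
# through the target but only the preprocessor's weights are updated.
import torch
import torch.nn as nn

def train_input_optimiser(preproc: nn.Module, target: nn.Module, loader,
                          epochs=10, lr=1e-4):
    target.eval()                          # target model stays frozen
    for p in target.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(preproc.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            optimised = preproc(images)    # learn to "repair" OOD inputs
            logits = target(optimised)     # frozen target consumes them
            loss = loss_fn(logits, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return preproc
```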
Existing metrics for evaluating the quality of automatically generated questions such as BLEU, ROUGE, BERTScore, and BLEURT compare the reference and predicted questions, providing a high score when there is a considerable lexical overlap or semantic similarity between the candidate and the reference questions. This approach has two major shortcomings. First, we need expensive human-provided reference questions. Second, it penalises valid questions that may not have high lexical or semantic similarity to the reference questions. In this paper, we propose a new metric, RQUGE, based on the answerability of the candidate question given the context. The metric consists of a question-answering and a span scorer module, in which we use pre-trained models from the existing literature, and therefore, our metric can be used without further training. We show that RQUGE has a higher correlation with human judgment without relying on the reference question. RQUGE is shown to be significantly more robust to several adversarial corruptions. Additionally, we illustrate that we can significantly improve the performance of QA models on out-of-domain datasets by fine-tuning on the synthetic data generated by a question generation model and re-ranked by RQUGE.
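The scoring recipe can be sketched as follows: answer the candidate question with a QA model, then score the predicted answer against the gold answer span. Here token-level F1 stands in for RQUGE's learned span scorer, and the QA checkpoint is an assumed choice, so scores will not match the paper.

```python
# Reference-free answerability scoring in the spirit of RQUGE: a QA model
# answers the candidate question, and the prediction is scored against the
# gold span. Token-level F1 replaces the learned span scorer (assumption).
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def token_f1(pred: str, gold: str) -> float:
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum(min(p.count(t), g.count(t)) for t in set(p) & set(g))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def answerability_score(question: str, context: str, gold_answer: str) -> float:
    pred = qa(question=question, context=context)["answer"]
    return token_f1(pred, gold_answer)
```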
The meaningful use of electronic health records (EHR) continues to progress in the digital era with clinical decision support systems augmented by artificial intelligence. A priority in improving provider experience is to overcome information overload and reduce the cognitive burden so fewer medical errors and cognitive biases are introduced during patient care. One major type of medical error is diagnostic error due to systematic or predictable errors in judgment that rely on heuristics. The potential for clinical natural language processing (cNLP) to model diagnostic reasoning in humans with forward reasoning from data to diagnosis and potentially reduce the cognitive burden and medical error has not been investigated. Existing tasks to advance the science in cNLP have largely focused on information extraction and named entity recognition through classification tasks. We introduce a novel suite of tasks coined as Diagnostic Reasoning Benchmarks, DR.BENCH, as a new benchmark for developing and evaluating cNLP models with clinical diagnostic reasoning ability. The suite includes six tasks from ten publicly available datasets addressing clinical text understanding, medical knowledge reasoning, and diagnosis generation. DR.BENCH is the first clinical suite of tasks designed to be a natural language generation framework to evaluate pre-trained language models. Experiments with state-of-the-art pre-trained generative language models using large general domain models and models that were continually trained on a medical corpus demonstrate opportunities for improvement when evaluated in DR.BENCH. We share DR.BENCH as a publicly available GitLab repository with a systematic approach to load and evaluate models for the cNLP community.
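A generic sketch of the kind of generation-based evaluation DR.BENCH implies, prompting a pre-trained seq2seq model on a diagnosis-generation task; the model name and prompt are placeholders, not the repository's actual loaders.

```python
# Generation-based evaluation sketch: prompt a pre-trained seq2seq model with
# a clinical note and decode a free-text diagnosis. Checkpoint and prompt
# wording are illustrative assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

def generate_diagnosis(clinical_note: str) -> str:
    prompt = "summarize the likely diagnosis: " + clinical_note
    ids = tok(prompt, return_tensors="pt", truncation=True).input_ids
    out = model.generate(ids, max_new_tokens=32)
    return tok.decode(out[0], skip_special_tokens=True)
```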
VQ (vendor qualification) and IOQ (installation and operational qualification) audits are carried out in warehouses to ensure that all equipment deployed across the fulfillment network complies with quality standards. Audit checks may be skipped when many of them have to be performed in a short time. In addition, exploratory data analysis revealed several instances of similar checks being carried out on the same assets, duplicating effort. In this work, natural language processing and machine learning are applied to a large checklist dataset from the warehouse network to identify similarities and duplicates and to predict non-critical checks with high pass rates. The study proposes ML classifiers that identify IOQ and VQ checks with a high probability of passing and assign priorities to checks, so that the most important ones can be prioritized when there is not enough time to execute them all. This research recommends an NLP-based BlazingText classifier for high-pass-rate checklists, which can eliminate 10%-37% of checks and considerably reduce costs. The applied algorithm outperformed Random Forest and neural network classifiers and reached an area under the curve of 90%. Because of the data imbalance, using the F1 score had a positive impact on the model's accuracy, improving it from 8% to 75%. In addition, the proposed duplicate-detection process identified 17% of checks as potentially redundant and suitable for pruning.
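As a local stand-in for the proposed BlazingText classifier (which runs on Amazon SageMaker), a hedged sketch with TF-IDF features and logistic regression over checklist text might look as follows; the data column semantics are hypothetical.

```python
# Local stand-in for a BlazingText-style text classifier: TF-IDF features and
# logistic regression predicting whether a check passes, then ranking checks
# by pass probability so likely-pass checks can be deprioritized.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000, class_weight="balanced"),  # imbalance
)
# checklist_text: list of check descriptions; passed: 1 if the check passed.
# clf.fit(checklist_text, passed)
# priorities = clf.predict_proba(new_checks)[:, 1].argsort()
```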
COVID-19 has affected the whole world, though misinformation about the outbreak has spread faster than the virus itself. Misinformation propagated through online social networks (OSNs) often misleads people away from correct medical practices. In particular, OSN bots have been a primary source for spreading false information and launching cyber propaganda. Existing work neglects the presence of bots, which act as catalysts in the spreading process, and focuses on fake news detection in "articles shared in posts" rather than in the post (text) content itself. Most work on misinformation detection uses manually labeled datasets, which are hard to scale up for building predictive models. In this research, we overcome this data scarcity challenge by labeling the data with verified fact-checked statements on a Twitter dataset. Furthermore, we combine textual features with user-level features (such as follower count and friend count) and tweet-level features (such as mentions, hashtags, and URLs in a tweet) to act as additional indicators for detecting misinformation. Moreover, we analyze the presence of bots in tweets and show that bots change their behavior over time and are most active during periods of misinformation. We collected 10.22 million COVID-19-related tweets and used our annotation model to build an extensive original ground-truth dataset for classification. We leverage various machine learning models to accurately detect misinformation, and our best classification model achieves 82% precision, 96% recall, and a 3.58% false positive rate. Moreover, our bot analysis indicates that bots accounted for roughly 10% of the misinformation tweets. Our methodology can substantially reduce exposure to misinformation, thereby improving the trustworthiness of information disseminated through social media platforms.
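A hedged sketch of the described feature combination, concatenating TF-IDF text features with user- and tweet-level metadata before classification; the feature columns are illustrative names, not the authors' exact schema.

```python
# Combine textual, user-level, and tweet-level features into one matrix for
# misinformation classification. Column layout is a hypothetical example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

def build_features(tweets, meta):
    """meta rows: [follower_count, friend_count, n_mentions, n_hashtags, n_urls]"""
    text = TfidfVectorizer(max_features=5000).fit_transform(tweets).toarray()
    return np.hstack([text, np.asarray(meta, dtype=float)])

# X = build_features(tweet_texts, tweet_metadata)
# y = labels derived from verified fact-checked statements (1 = misinformation)
# model = RandomForestClassifier(n_estimators=300).fit(X, y)
```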
Site preparation by mounding is a commonly used silvicultural treatment that improves tree growth conditions by mechanically creating planting microsites called mounds. Following site preparation, the next critical step is counting the number of mounds, which provides forest managers with a precise estimate of the number of seedlings required for a given plantation block. Mound counting is generally conducted through manual field surveys by forestry workers, which is costly and prone to errors, especially over large areas. To address this issue, we present a novel framework that exploits advances in UAV imaging and computer vision to accurately estimate the number of mounds on a plantation block. The proposed framework comprises two main components. First, we exploit a visual recognition method based on a deep learning algorithm for multiple-object detection through pixel-based segmentation. This enables a preliminary count of visible mounds, as well as other frequently seen objects (e.g., trees, debris, accumulations of water), to be used to characterize the plantation block. Second, since visual recognition can be limited by several perturbation factors (e.g., mound erosion, occlusion), we employ a machine learning estimation function that predicts the final count based on the local block properties extracted in the first stage. We evaluate the proposed framework on a new UAV dataset representing numerous plantation blocks with varying features. The proposed method outperformed manual counting in terms of relative counting precision, indicating that it has favorable and efficient potential in difficult situations.
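The two-stage design can be sketched as follows: per-class counts from a predicted segmentation mask serve as block-level features, and a regressor corrects the final mound count. Both stages are placeholders for the paper's actual detector and estimation function.

```python
# Stage 1 output (a segmentation mask) is summarized into block-level
# features; stage 2 regresses the true mound count from those features,
# absorbing occlusion and erosion effects. Label map is hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

CLASS_IDS = {"mound": 1, "tree": 2, "debris": 3, "water": 4}

def block_properties(seg_mask):
    """Summarize a predicted segmentation mask into per-class pixel counts."""
    return np.array([(seg_mask == c).sum() for c in CLASS_IDS.values()])

# X = np.stack([block_properties(m) for m in predicted_masks])
# corrector = GradientBoostingRegressor().fit(X, true_mound_counts)
# final_count = corrector.predict(block_properties(new_mask)[None, :])
```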
Breast cancer is the most common cancer among women worldwide, and early diagnosis can significantly improve treatment efficiency. Computer-aided diagnosis (CAD) systems are widely adopted because of their reliability, accuracy, and affordability. Among the different imaging techniques for breast cancer diagnosis, the most accurate, and the one used in this paper, is histopathology. Deep transfer learning serves as the main idea behind the feature extractor of the proposed CAD system. Although 16 different pre-trained networks were tested in this study, our main focus is on the classification phase. Among all the tested CNNs, Inception-ResNet-v2, which combines the benefits of residual and Inception networks, showed the best feature extraction capability. In the classification phase, an ensemble of CatBoost, XGBoost, and LightGBM provided the best average accuracy. The BreakHis dataset, containing 7,909 histopathological images (2,480 benign and 5,429 malignant) at four magnification factors, was used to evaluate the proposed method. Using 70% of BreakHis as training data, the accuracy of the proposed method (IRv2-CXL) is 96.82%, 95.84%, 97.01%, and 96.15% at 40X, 100X, 200X, and 400X magnification, respectively. Most studies on automatic breast cancer detection have concentrated on feature extraction, which led us to focus on the classification phase instead. IRv2-CXL shows better or comparable results thanks to the soft voting ensemble method, which combines the strengths of CatBoost, XGBoost, and LightGBM.
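A condensed sketch of the IRv2-CXL pipeline, assuming a frozen Inception-ResNet-v2 feature extractor followed by a soft-voting ensemble of CatBoost, XGBoost, and LightGBM; preprocessing details and hyperparameters are illustrative.

```python
# Frozen Inception-ResNet-v2 features feeding a soft-voting ensemble of
# gradient-boosted classifiers. Hyperparameters are illustrative defaults.
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from sklearn.ensemble import VotingClassifier
from tensorflow.keras.applications import InceptionResNetV2
from xgboost import XGBClassifier

extractor = InceptionResNetV2(include_top=False, pooling="avg",
                              weights="imagenet")   # frozen feature extractor

ensemble = VotingClassifier(
    estimators=[("cat", CatBoostClassifier(verbose=0)),
                ("xgb", XGBClassifier()),
                ("lgbm", LGBMClassifier())],
    voting="soft",                                   # average class probabilities
)
# features = extractor.predict(preprocessed_images)  # (n_images, 1536)
# ensemble.fit(features, labels)                     # benign vs. malignant
```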